🌻 ! gemini bites 1

24 Jan 2026


Non-mosquitoes do not cause non-mosquito bites#

The project of understanding human belief systems through causal mapping is an essential endeavor for democratic inquiry. In the effort to make sense of interview data from hundreds or thousands of individuals, the researcher is not merely categorizing responses but is attempting to reconstruct the "naïve metaphysics" that citizens use to navigate social and political reality. A fundamental tension exists, however, between the formalisms used to represent these causal claims and the ontological status of the factors being mapped. On one hand, the dominant quantitative traditions of system dynamics and econometrics treat causal factors as numerical variables—dimensions of variation that move within a specified range, often $[0,1]$ or $[-1,1]$. On the other hand, the ways in which people actually speak about the world in ordinary language suggest an ontology of "primitive" boxes or propositional events, where causation is viewed not as a correlation between variables but as the triggering of inherent causal powers within specific entities and configurations.

The choice of formalism is not a neutral technical decision; it carries deep ontological commitments that can either illuminate or obscure the actual logic of human sense-making. If the goal is to capture how people think the world works without over-specifying the underlying logic, it is necessary to move toward a representation that avoids the symmetry and continuity of variable-based models while steering clear of the over-specification found in the Boolean logics of necessity and sufficiency.

The most pervasive tradition in causal modeling is rooted in the "variable-centric" approach, where the world is conceived as a set of interacting variables joined by causal connections with polarity and strength. In this framework, a causal factor is defined by its ability to vary. To say that "poverty causes crime" in a variable-based model is to imply a relationship between two dimensions: the level of poverty ($X$) and the rate of crime ($Y$). This relationship is typically expressed through a coefficient that dictates how a change in $X$ results in a change in $Y$.

While mathematically powerful for prediction and simulation, this approach is often a "very big stretch" from ordinary language. When an individual states that "the closure of the local factory led to a decline in community spirit," they are not necessarily describing a continuous relationship between "Factory Operational Status" and "Spirit Levels". They are describing a discrete, particular event—the closure—and its productive power to bring about another state.

The Mosquito Bite Fallacy#

The ontological mismatch between variables and ordinary language is best illustrated by Michael Scriven’s critique involving the relationship between mosquitoes and mosquito bites. In a variable-based notation, if "Mosquito Presence" is a variable ($M$) and "Bite Presence" is another ($B$), the claim that "mosquitoes cause mosquito bites" is often modeled as a positive correlation or a functional dependency. The formal logic of variables, however, treats the absence of the cause and the absence of the effect as being just as informative as their presence. Thus, the statement "$M$ causes $B$" is logically equivalent to "$\neg M$ causes $\neg B$", or "non-mosquitoes cause non-mosquito bites".

This equivalence is, as Scriven notes, nonsense. In the real world, the absence of a mosquito is a background condition, not a cause of the "non-bite." Causal mapping traditions that use variables frequently fail to distinguish between the two, whereas a realist approach would record "mosquitoes cause bites" as a specific propositional claim. If an interviewee never mentions "non-mosquitoes," a realist map would never record such a factor. The variable-based interpretation forces a "sample space" onto the data that the participants themselves did not construct, thereby misrepresenting the structure of their causal reasoning.
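The asymmetry can be sketched in a few lines of code: a realist map is just a store of asserted claims, so the "negative" mirror of a claim never appears unless someone actually states it. The class and claim strings below are illustrative, not drawn from any real dataset.

```python
# Sketch: a realist causal map records only the claims actually made.
class CausalMap:
    def __init__(self):
        self.links = set()  # (cause, effect) pairs, as asserted

    def assert_claim(self, cause, effect):
        """Record a propositional claim: 'cause produced effect'."""
        self.links.add((cause, effect))

    def is_claimed(self, cause, effect):
        return (cause, effect) in self.links

m = CausalMap()
m.assert_claim("mosquito landed", "bite appeared")

# The asserted claim is in the map...
assert m.is_claimed("mosquito landed", "bite appeared")
# ...but its variable-style mirror image is not: nobody asserted it.
assert not m.is_claimed("no mosquito", "no bite")
```

There is no hidden sample space here: a link either was claimed or it simply is not in the structure.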

| Feature | Variable-Based Models | Realist Causal Mapping |
| --- | --- | --- |
| Ontological Unit | Dimension of Variation (Variable) | Propositional Event / Entity |
| Causal Relation | Functional Dependency / Coefficient | Triggering of Causal Power |
| Symmetry | Inherent (Presence and Absence are linked) | Asymmetric (Production is distinct from omission) |
| Data Source | Measurements / Quantitative Indicators | Narratives / Interview Claims |
| Logic | Correlation / Probabilistic Dependency | Generative Mechanism / Action |

The tradition of cognitive mapping, as pioneered by Robert Axelrod in "The Structure of Decision," represents a smaller, more qualitative branch of this discourse. Axelrod sought to map the belief systems of political elites by translating texts into simple assertions represented as a directed graph. In this notation, nodes represent concepts and arrows represent causal influence, usually marked with a plus (+) for positive relationships or a minus (-) for negative ones.

Despite its qualitative appearance, Axelrod’s tradition still largely conceives of these concepts as "variables." A plus sign attached to an arrow from "Economic Stability" to "Public Satisfaction" implies that "changes occur in the same direction". While this is a simplification of the complex differential equations found in system dynamics, it remains anchored in the logic of "more of X leads to more of Y." This still fails to capture the "primitive" nature of causal claims where the cause is an event or a "chunk" of information rather than a point on a scale.
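Axelrod-style notation can be sketched as a signed digraph, where the sign of an indirect effect along a path is the product of the edge signs. The concept names below are illustrative:

```python
# Sketch of an Axelrod-style cognitive map: nodes are concepts,
# edges carry +1/-1 polarity.
edges = {
    ("economic stability", "public satisfaction"): +1,
    ("public satisfaction", "protest activity"): -1,
}

def path_sign(path):
    """Sign of the indirect effect along a list of nodes:
    the product of the signs of each edge on the path."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= edges[(a, b)]
    return sign

# More stability -> more satisfaction -> less protest: net negative.
assert path_sign(["economic stability", "public satisfaction",
                  "protest activity"]) == -1
```

Note how thoroughly variable-centric this is: the computation only makes sense if every node is a quantity that can go up or down.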

Axelrod's focus was on the structure of beliefs and the consistency of decision-making. He was less concerned with the deep ontology of what a "cause" is and more with how a set of stated beliefs could generate consequences for policy. This "epistemic" approach to mapping—treating the map as a representation of a person's knowledge—often ignores the "ontic" question of how the world is actually perceived to work in virtue of the powers of the entities involved.

A significant advancement in the academic literature regarding the ontology of causal factors comes from the "New Mechanism" movement, particularly the work of Stuart Glennan. Glennan argues that the "Big Deal" about mechanism is its potential to solve thorny philosophical problems by reimagining scientific ontology. He defines a "minimal mechanism" for a phenomenon as a collection of entities (or parts) whose activities and interactions are organized in such a way that they produce the phenomenon.

This framework moves significantly closer to the "realist" causal mapping requested by the user. Glennan makes several distinctions that are critical for a formalism representing ordinary language:

  • Particulars vs. Regularities: Unlike traditional views that see mechanisms as necessarily recurrent or regular, Glennan’s "minimal mechanism" allows for mechanisms responsible for phenomena that do not recur. This allows for the representation of "one-off" mechanisms—causal chains that occur only once. This matches the way interviewees often describe unique historical or personal events as causes.

  • Entities and Activities: Glennan’s ontology includes both "entities" (objects with properties) and "activities" (what those entities do). This aligns with the "box leading to another box" intuition, where the boxes are entities and the arrows are the activities that constitute the causal link.

  • Production over Relevance: Glennan argues that "causal production" is more fundamental than "causal relevance". A relevance claim (e.g., "X is relevant to Y") is a comparative claim about possible mechanisms, whereas a production claim (e.g., "X produced Y") is an account of the actual mechanism in operation.

In Glennan’s view, causes and effects must be connected by mechanisms; if one event causes a second, there exists a mechanism by which the first event contributes to the production of the second. This "mechanistic theory of causation" explains why a cause has the power to produce its effect—it is in virtue of the internal organization and activities of the mechanism’s parts.

| Component | Definition in Glennan's Minimal Mechanism | Application to Causal Mapping |
| --- | --- | --- |
| Entity | A component part with specific properties. | The "subject" node in a map (e.g., "The Bank"). |
| Activity | The productive interaction or "doing." | The "causal link" or arrow (e.g., "Lent money"). |
| Organization | The spatial/temporal arrangement of parts. | The structure/topology of the map. |
| Phenomenon | The behavior or outcome produced. | The "effect" node (e.g., "The business grew"). |
| Mechanism | The system of entities and activities. | The entire causal chain or "nugget." |
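Glennan's components lend themselves to a small data model. The sketch below is one possible encoding, using the bank example from the table; the class and field names are illustrative, not a standard notation.

```python
from dataclasses import dataclass

# Sketch: Glennan's minimal mechanism as a data structure.
@dataclass(frozen=True)
class Entity:
    name: str          # a part with properties, e.g. "The Bank"

@dataclass(frozen=True)
class Activity:
    actor: Entity      # entity exercising the power
    verb: str          # the productive "doing", e.g. "lent money to"
    patient: Entity    # entity acted upon

@dataclass
class Mechanism:
    activities: list   # the organized chain of activities
    phenomenon: str    # what the mechanism produces

bank = Entity("The Bank")
business = Entity("The Business")
mech = Mechanism(
    activities=[Activity(bank, "lent money to", business)],
    phenomenon="The business grew",
)

assert mech.activities[0].verb == "lent money to"
assert mech.phenomenon == "The business grew"
```

The point of the encoding is that the arrow is not a coefficient: it is an `Activity` with an actor, a verb, and a patient.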

The user's preference for a "realist" approach is strongly supported by the "causal powers realism" found in the works of philosophers like Roy Bhaskar and Rom Harré. This tradition argues that causation is not a matter of constant conjunction or correlation between variables but is the "triggering" or "instantiation" of a power that an object or configuration possesses in virtue of its nature.

Critical realism identifies a "layered" reality consisting of the Empirical (what we experience), the Actual (the events that occur), and the Real (the underlying structures and powers). Empiricist traditions, which rely on statistics and variable-based models, are often trapped in the "empirical" or "actual" domains, looking for linear relationships between observed variables. Realists, however, seek to abstract the underlying "causal mechanisms" or "causal powers" that produce these observations.

The Fallacy of Actualism#

Bhaskar identifies a specific error known as the "fallacy of actualism," where the powers, tendencies, and mechanisms of objects are simplified to their exercise in specific events. This leads to a scenario where "undifferentiated events" are treated as the only objects of science, ignoring the deep structures that make those events possible.

In the context of causal mapping, representing factors as propositional events (rather than as undifferentiated values of a variable) allows the researcher to record the occurrence of a power being exercised. For example, "The sun warms the back" is not just a correlation between "Sun Position" and "Skin Temperature"; it is the sun emitting energy (exercising power) that raises the temperature of the skin. Causal powers are not reducible to counterfactuals; they are "directed towards potential manifestations"—flammable things are directed toward igniting, and people with fear are directed toward fearful actions.

This realist ontology suggests that the "factors" in a causal map should not be abstract variables but "causal nuggets" of information that describe an entity exercising a power to affect another. This avoids the symmetry of the variable because a power can exist without being triggered, and its non-triggering does not constitute a "negative cause" of a "negative effect" in the same way that a variable-based model implies.

The work of Gary Goertz represents a different challenge to the variable-centric model. Goertz argues that social science is rich in theories that imply "necessary conditions"—claims where the absence of a factor guarantees the failure of an outcome. For instance, "richness is sufficient for democracy" is a fundamentally different claim from "the richer a case, the more democratic it is". The former postulates an asymmetric set relation, while the latter assumes a continuous correlation.

Necessary Condition Analysis (NCA), developed within this set-relational tradition, identifies these "bottlenecks" or "constraints" in data. A necessary cause is like a "barrier" that must be managed to allow an outcome to exist; without it, there is "guaranteed failure" that cannot be compensated for by other factors. This multiplicative logic ($Y = X_1 \times X_2 \times X_3$) stands in stark contrast to the additive logic of multiple regression, where factors can "compensate" for one another.
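The contrast between the two logics is easy to see numerically. Assuming illustrative factor scores in $[0,1]$:

```python
# Sketch: additive vs multiplicative (necessary-condition) logic.
# Scores are hypothetical; leadership is the absent necessary factor.
leadership, funding, timing = 0.0, 0.9, 0.8

# Additive (regression-style): strong factors compensate for a weak one.
additive = (leadership + funding + timing) / 3

# Multiplicative (necessary-condition logic): one absent necessary
# factor guarantees failure, whatever the other scores are.
multiplicative = leadership * funding * timing

assert additive > 0.5          # looks moderately successful
assert multiplicative == 0.0   # bottleneck: guaranteed failure
```

The additive model happily averages away the missing leadership; the multiplicative model treats it as the barrier it is claimed to be.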

The Risk of Over-Specification#

However, as the user points out, Goertz’s approach may be "too overspecified" for the simple claims found in ordinary language. By taking causal factors as propositions subject to strict Boolean logic, Goertz gets "tangled up" in the complexities of equifinality (multiple paths to an outcome) and multifinality (one cause leading to many outcomes).

In the context of interview data, when a person says "good leadership was necessary for our success," they are rarely making a formal claim that success is logically impossible without leadership across all possible universes. They are describing the specific "power" of the leader in their narrative context. Forcing these claims into a formal necessity/sufficiency framework may introduce "additional complications" that are unnecessary for the purpose of mapping how people think the world works.

| Logic Type | Representation | Causal Intuition | Limitation for Narratives |
| --- | --- | --- | --- |
| Additive | $Y = \beta_1 X_1 + \beta_2 X_2$ | Factors contribute and compensate. | Fails to identify essential bottlenecks. |
| Multiplicative | $Y = X_1 \cdot X_2 \cdot X_3$ | All necessary factors must be present. | Too rigid for messy, singular narratives. |
| Realist/Power | $A \xrightarrow{\text{triggers}} B$ | An entity exercises power in context. | Harder to quantify across large datasets. |
| Boolean | $X \implies Y$ | Logical necessity and sufficiency. | Over-specifies informal speech. |

If we seek a formalism that captures the "primitive" intuition of "boxes leading to boxes" without the baggage of variables or strict Boolean logic, the literature on cognitive linguistics offers a compelling middle ground. Leonard Talmy’s theory of "Force Dynamics" and Peter Gärdenfors’ "Two-Vector Model of Events" provide a way to formally capture the status of causes and effects in ordinary language.

Leonard Talmy's Force Dynamics#

Talmy describes Force Dynamics as a semantic category that structures how language represents the interaction of forces, counterforces, and causal relations. He identifies two primary entities in any causal scene: the Agonist (the entity whose state is at issue) and the Antagonist (the entity that exerts force upon the Agonist).

Force Dynamics generalizes over the notion of "causative" and identifies several patterns:

  • Onset Causation: An Antagonist overcomes an Agonist’s tendency toward rest or motion (e.g., "The wind blew the door open").

  • Extended Causation: A continuous force maintains a state (e.g., "The pillar keeps the roof up").

  • Letting: An Antagonist that was previously blocking an Agonist is removed (e.g., "The drain let the water out").

  • Prevention: An Antagonist successfully blocks the Agonist’s tendency (e.g., "The log stopped the ball").

This framework is "primitive" in the sense that it does not require numerical variables. It uses "schematic abstractions" of embodied interactions to represent causal relations. For causal mapping, this means we can represent a link not just as a "+" or "-", but as a specific type of force interaction (causing, letting, or preventing) that corresponds directly to the verbs used in interviews.
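One minimal way to operationalize this for coding interview links is a small vocabulary of force patterns instead of bare polarity. The four-way enumeration below is a simplification of Talmy's taxonomy, and the link tuple is an illustrative convention, not his notation:

```python
from enum import Enum

# Sketch: replacing +/- polarity with Talmy-style force patterns.
class ForcePattern(Enum):
    CAUSE = "onset causation"        # "The wind blew the door open"
    MAINTAIN = "extended causation"  # "The pillar keeps the roof up"
    LET = "letting"                  # "The drain let the water out"
    PREVENT = "prevention"           # "The log stopped the ball"

# A link names the Antagonist, the force pattern, and the Agonist's
# tendency that is at issue.
link = ("the log", ForcePattern.PREVENT, "the ball's rolling on")

assert link[1] is ForcePattern.PREVENT
assert link[1].value == "prevention"
```

A coder applying this scheme picks the pattern that matches the verb in the transcript ("blew open", "kept up", "let out", "stopped") rather than collapsing everything to a sign.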

The Two-Vector Model of Events#

Peter Gärdenfors formalizes these linguistic intuitions into a "Two-Vector Model of Events". He argues that human causal cognition is structured in terms of events, which are represented by two main components:

  1. The Force Vector: The cause, representing the energy or action applied by an agent or external force.

  2. The Result Vector: The effect, representing the change in the properties of the "patient" (the entity being acted upon).

This model moves causation "inside" the event; an event is a mapping from an action space (forces) to a result space (changes in properties). This model is explicitly designed to avoid the problems of Bayesian or probabilistic models. It respects three key mathematical properties that align with human qualitative thinking:

  • Monotonicity: Larger forces lead to larger results.

  • Continuity: Small changes in force lead to small changes in result.

  • Convexity: Intermediate forces lead to intermediate results, facilitating generalization and categorization.

Importantly, the Two-Vector model handles "what-if" (counterfactual) reasoning through simulations of force/counterforce vectors rather than just looking at joint probability distributions. It also provides a clear account of "omissive causation"—the removal of a force vector. This provides a formal way to treat factors as "events" or "propositions" (e.g., "Force F was applied to Object O") without reducing them to variables.
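A toy version of the two-vector model maps a force vector to a result vector. The linear mapping below is an illustrative assumption (chosen because a linear map trivially satisfies monotonicity, continuity, and convexity), not Gärdenfors' own formalization:

```python
# Sketch of the two-vector event model: an event maps a force vector
# (the cause) to a result vector (the change in the patient).
def event(force, stiffness=2.0):
    """Map a 2-D force vector to a 2-D result vector (displacement).
    The linear form is an illustrative assumption."""
    return tuple(f / stiffness for f in force)

small = event((1.0, 0.0))
large = event((3.0, 0.0))

# Monotonicity: a larger force produces a larger result.
assert large[0] > small[0]
# Omissive causation: removing the force vector removes the result.
assert event((0.0, 0.0)) == (0.0, 0.0)
```

Counterfactual "what-if" reasoning then amounts to re-running `event` with a different force vector, rather than conditioning on a joint probability distribution.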

When processing interview data at scale, the map itself becomes a "knowledge graph" that aggregates these "causal nuggets". This approach treats the map not as a model of the physical world but as a representation of the "logic of evidence"—how the sources claim the system works.

In this framework, the formal status of causal factors is as follows:

  • Factors as Propositions: Each node is a discrete assertion (e.g., "The policy was implemented").

  • Links as Asserted Mechanisms: Each arrow is a claim that one proposition influenced another.

  • Aggregation through Transitivity: The fundamental property of these maps is transitivity—if Person A says $X \rightarrow Y$ and Person B says $Y \rightarrow Z$, the researcher can infer a potential pathway $X \rightarrow Z$.

This avoids the "transitivity trap" and the "mosquito bite fallacy" because the map is purely additive. We only record the claims that are actually made. If an interviewee does not claim that "non-mosquitoes cause non-mosquito bites," that link is simply not in the graph. There is no hidden "sample space" of negative values unless they are explicitly articulated as "prevention" or "omission".
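Aggregation of this kind can be sketched as path-finding over a set of asserted links. The sources and claims below are hypothetical:

```python
# Sketch: aggregating per-interview claims into a knowledge graph and
# inferring pathways by transitivity. Claims are illustrative.
claims = [
    ("Alice", "factory closed", "jobs were lost"),
    ("Bob", "jobs were lost", "community spirit declined"),
]

links = {(c, e) for _, c, e in claims}

def pathways(start, end, seen=()):
    """All transitive paths start -> ... -> end over asserted links."""
    if start == end:
        return [[end]]
    paths = []
    for c, e in links:
        if c == start and e not in seen:
            for tail in pathways(e, end, seen + (e,)):
                paths.append([start] + tail)
    return paths

# No single person asserted the end-to-end link, but transitivity
# surfaces a candidate pathway across sources.
assert pathways("factory closed", "community spirit declined") == [
    ["factory closed", "jobs were lost", "community spirit declined"]
]
```

The inferred pathway is a hypothesis about the evidence, not a claim any individual made, which is why the text calls this the "logic of evidence".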

The Role of Causal Nuggets#

Causal mappers believe that humans are good at thinking in terms of "causal nuggets"—discrete, manageable pieces of causal information. Causal mapping is easier and more effective if the researcher is "realist about causation," accepting that the links represent actual productive powers being described by the participants. This approach is part of the "qualitative branch of the new causal revolution," emphasizing the narrative and mechanistic aspects of causal thinking over mere probabilistic knowledge.

| | Traditional Modeling (CBNs) | Realist Causal Mapping (Nuggets) |
| --- | --- | --- |
| Logic | Probability / Do-Calculus | Generative Mechanism / Narrative |
| Factor Status | Variable in a Partition | Discrete Propositional Claim |
| Completeness | Requires full joint distribution | Additive; records only asserted claims |
| Inference | Conditional probability queries | Pathway tracing via transitivity |

The academic literature suggests that the best way to "formally capture the status of causes and effects in ordinary language" without over-specifying the logic is to adopt a Propositional Mechanism Notation. This notation should be grounded in the "minimal mechanism" of Glennan and the "force dynamics" of Talmy.

Characteristics of a Propositional Mechanism Notation#

  1. Asymmetric Propositional Nodes: Each factor is a proposition describing an occurrent (an event, a state, or a process). It is not a variable that can be toggled to 0 or 1. If the event did not occur, it is not part of the specific causal chain being mapped.

  2. Productive Links (Activities): Arrows are not mere correlations; they are "activities" or "interactions". Each arrow should ideally carry a "verb" or a "causal power" description (e.g., "triggered," "blocked," "enabled").

  3. Context-Sensitivity: The "power" of a cause to produce an effect is dependent on the "organization" of the other factors. This matches the realist intuition that causation is an "instantiation of a causal power" in a specific configuration.

  4. Avoidance of Coefficients: Polarity (+/-) should be replaced by "Force Patterns" (causing vs. preventing). Strength should be represented by the "frequency of assertion" across the dataset (how many people believe this?) rather than a numerical coefficient.

  5. Handling of Omission: Omissions and absences are treated as discrete propositional events (e.g., "The lack of rain") rather than as the zero-value of a "Rain Variable".
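The five characteristics above can be combined into a minimal record type. The field names, the `nugget`/`support` helpers, and the interview claims are all hypothetical, chosen to illustrate the notation rather than implement it:

```python
# Sketch of a Propositional Mechanism Notation as plain records.
def nugget(cause, verb, effect, source):
    """One asserted causal claim: cause --verb--> effect."""
    return {"cause": cause, "verb": verb, "effect": effect,
            "source": source}

nuggets = [
    nugget("the lack of rain", "triggered", "the harvest failed",
           "interview 12"),
    nugget("the lack of rain", "triggered", "the harvest failed",
           "interview 40"),
    nugget("the cooperative", "enabled", "seed was shared",
           "interview 12"),
]

def support(cause, verb, effect):
    """Strength as frequency of assertion, not a coefficient."""
    return sum(1 for n in nuggets
               if (n["cause"], n["verb"], n["effect"])
               == (cause, verb, effect))

assert support("the lack of rain", "triggered",
               "the harvest failed") == 2
```

Note that the absence ("the lack of rain") enters the map as a proposition in its own right, exactly as point 5 requires, and links carry verbs ("triggered", "enabled") rather than signs.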

This formalism respects the "primitive" nature of causal claims while providing enough structure to analyze complexity across thousands of interviews. It satisfies the democratic need to know "how people really think the world works" because it allows them to define the entities and activities that populate their own mental maps, without forcing them into a pre-defined variable-based or Boolean-logic cage.

Beyond the immediate question of formalism, the literature suggests that causal mapping can reveal "emergent" phenomena in social systems. Causal emergence occurs when new causal laws or relationships arise at a higher level of abstraction that cannot be solely attributed to individual properties. In large-scale interview data, this means that while each individual "nugget" is a particular claim, the aggregate map may reveal systemic "causal characteristics" that no single interviewee fully perceived.

Furthermore, the use of "Longitudinal Qualitative Research" (LQR) in conjunction with causal mapping can uncover how these "meanings" and "trajectories" change over time. By tracking how the "force vectors" in a community's mental model shift—for example, moving from a focus on "government as a provider" to "government as a barrier"—researchers can identify the "critical moments" of social change.

The Interplay of Mechanism and Narrative#

Recent experimental work identifies three hallmarks of human causal reasoning that go beyond probabilistic knowledge: Mechanism, Narrative, and Mental Simulation. These are closely related; mental simulations are representations over time of mechanisms. This explains why people describe the world in terms of "boxes leading to boxes"—they are communicating a simulation of a mechanism.

A formalism that treats factors as "events" (propositions) and links as "activities" (mechanisms) is essentially a tool for capturing these mental simulations. By doing so, we move closer to a scientific understanding of the mind that is "intimately tied to actions" and the effects of intervention in both the actual and counterfactual worlds.

| Level of Analysis | Unit of Mapping | Causal Focus |
| --- | --- | --- |
| Individual | Causal Nuggets / Simulations | Personal Experience and Agency |
| Narrative | Mechanisms and Activities | How and Why events are linked |
| Aggregate | Knowledge Graph / Emergence | Systemic structures and collective belief |
| Philosophical | Causal Powers and Realism | The underlying "Real" domain |

The academic literature on the ontology and formalism of causal factors confirms that the user's intuition is correct: the variable-based tradition is an over-specification that misrepresents ordinary language. The "mosquito bite" example highlights a fundamental ontological error in treating causation as a symmetrical correlation between dimensions of variation.

By contrast, the "New Mechanism" of Glennan and the "Critical Realism" of Bhaskar provide a robust foundation for a Propositional Mechanism Notation. This approach treats factors as discrete events or occurrents—entities exercising their inherent causal powers to produce specific results. By using the "Force Dynamics" of Talmy and the "Two-Vector Model" of Gärdenfors, we can formally represent these claims as asymmetric mappings from force to result, avoiding the nonsense of "non-mosquitoes" while capturing the qualitative nuance of human thought.

Ultimately, the goal of causal mapping in a democracy is to provide a "model you can query" to understand the diverse narratives that shape our world. This requires a formalism that is as simple as a box leading to a box, but as deep as the powers and mechanisms that those boxes represent. By reclaiming the "primitive" status of the causal factor, we allow the voices of thousands of people to speak for themselves, describing the world as they truly believe it to work.

Works cited#

  1. Causal Models: How People Think About the World and Its Alternatives, https://books.google.com/books/about/Causal_Models.html?hl=ms&id=JvlIEOLuPlgC
  2. Causal Models: How People Think About the World and Its Alternatives - Steven Sloman, https://books.google.ps/books/about/Causal_Models.html?hl=ar&id=d9GTO0rl58QC&utm_source=gb-gplus-shareCausal
  3. Cognitive Maps, https://web.itu.edu.tr/topcuil/ya/MDM02xCognitiveMaps.pdf
  4. Structure of decision: The cognitive maps of political elites - ResearchGate, https://www.researchgate.net/publication/316062769_Structure_of_decision_The_cognitive_maps_of_political_elites
  5. New Ontological Foundations for Extended Minds: Causal Powers Realism - Charles Lassiter, Gonzaga University, https://www.charleslassiter.net/uploads/3/0/1/2/30128463/causal_powers_realism_and_4e_theories_phenom_and_cog_sci.pdf
  6. Critical Realism and Qualitative Research Methods, https://bura.brunel.ac.uk/bitstream/2438/18209/1/FullText.doc
  7. An Ontology Design Pattern for Representing Causality - Scholar Commons, https://scholarcommons.sc.edu/cgi/viewcontent.cgi?article=1615&context=aii_fac_pub
  8. The longitudinal qualitative research design in nursing, health, and social care research: philosophy, methodology, and methods - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC12777258/
  9. Necessary Condition Analysis (NCA): Logic and methodology of ..., https://repub.eur.nl/pub/77890/ERS-2015-004-LIS.pdf
  10. Theoriebasierte Evaluation: Entwicklung und Anwendung eines Verfahrensmodells zur Programmtheoriekonstruktion - DOKUMEN.PUB, https://dokumen.pub/theoriebasierte-evaluation-entwicklung-und-anwendung-eines-verfahrensmodells-zur-programmtheoriekonstruktion-1-aufl-2020-978-3-658-27578-5-978-3-658-27579-2.html
  11. Why: A Guide to Finding and Using Causes - National Academic Digital Library of Ethiopia, http://ndl.ethernet.edu.et/bitstream/123456789/17879/1/491.pdf
  12. (PDF) Causal Mapping Garden - ResearchGate, https://www.researchgate.net/publication/400132575_Causal_Mapping_Garden
  13. Structure of decision, the cognitive maps of political elites - VVSOR, https://www.vvsor.nl/wp-content/uploads/2020/06/MDN1979010011.pdf
  14. Unifying the Cognitive-Map and Operational-Code Approaches: An Integrated Framework with an Illustrative Example, https://www.robertcutler.org/download/html/ch82cj.html
  15. Review of Stuart Glennan, The New Mechanical Philosophy, https://www.cambridge.org/core/journals/philosophy-of-science/article/review-of-stuart-glennan-the-new-mechanical-philosophy/529FE3A6D8E3C9A5977C59152A030A68
  16. The New Mechanical Philosophy - Notre Dame Philosophical Reviews, https://ndpr.nd.edu/reviews/the-new-mechanical-philosophy/
  17. (PDF) On mechanisms, pathways and their models - ResearchGate, https://www.researchgate.net/publication/395766474_On_mechanisms_pathways_and_their_models
  18. Mechanisms, Causation and Laws (Chapter 5) - Cambridge University Press, https://www.cambridge.org/core/books/mechanisms-in-science/mechanisms-causation-and-laws/6CC7D1E4B5AB72311E809F3E2EE973C9
  19. EXPLAINING CONTINUITY AND CHANGE IN REGIONAL FOREIGN POLICY OF SAUDI ARABIA A THESIS SUBM - Middle East Technical University, https://open.metu.edu.tr/bitstream/handle/11511/118501/10785169.pdf
  20. Finite State Machines and Social Theory — InCognito - James L Caton, https://jameslcaton.com/InCognito/finite_state_machines_and_social_theory.html
  21. From Covariation to Causation: A Causal Power Theory - UCLA Reasoning Lab, https://reasoninglab.psych.ucla.edu/wp-content/uploads/sites/273/2021/04/Cheng1.PR_.1997.pdf
  22. Necessary Conditions: Theory, Methodology, and Applications. Edited by Gary Goertz and Harvey Starr. (Rowman and Littlefield, 20, https://energy.ceu.edu/sites/default/files/publications/review-goertz-jop-2008.pdf
  23. The Methodology of Necessary Conditions - ResearchGate, https://www.researchgate.net/publication/271406978_The_Methodology_of_Necessary_Conditions
  24. Force Dynamics - ResearchGate, https://www.researchgate.net/publication/287588774_Force_Dynamics
  25. Force Dynamics in Language and Cognition - Semantic Scholar, https://www.semanticscholar.org/paper/Force-Dynamics-in-Language-and-Cognition-Talmy/ad174dce3323869d32b06eb3a7779fb255ce2839
  26. Force Dynamics in Language and Cognition - Semioticon, https://semioticon.com/sio/wp-content/uploads/sites/4/2023/09/04-dynamical-models.pdf
  27. Force dynamics and Greek prepositions - Koine-Greek, https://koine-greek.com/2025/11/06/force-dynamics-and-greek-prepositions/
  28. A Causal Model Theory of the Meaning of Cause, Enable, and Prevent - ResearchGate, https://www.researchgate.net/publication/51139721_A_Causal_Model_Theory_of_the_Meaning_of_Cause_Enable_and_Prevent
  29. Events and Causal Mappings Modeled in Conceptual Spaces - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC7179668/
  30. Events and Causal Mappings Modeled in Conceptual Spaces - Frontiers, https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2020.00630/full
  31. A two-vector model of events - Peter Gärdenfors & Massimo Warglien, https://www.mpi.nl/sites/default/files/2019-04/Gardenfors_EvRep.pdf
  32. Events and Causal Mappings Modeled in Conceptual Spaces - PubMed, https://pubmed.ncbi.nlm.nih.gov/32373016/
  33. A framework for representing action meaning in artificial systems via force dimensions - SciSpace, https://scispace.com/pdf/a-framework-for-representing-action-meaning-in-artificial-4uix2r7gpf.pdf
  34. (PDF) Causality in Thought (2015) - Steven A. Sloman - SciSpace, https://scispace.com/papers/causality-in-thought-57i4ros40n
  35. Mechanisms in Science - Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/science-mechanisms/
  36. Emergence and Causality in Complex Systems: A Survey of Causal Emergence and Related Quantitative Studies - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC10887681/
